The 'Oppenheimer Moment' That Looms Over Today's AI Leaders
"I always thought AI was going to be way smarter than humans and an existential risk, and that's turning out to be true," Musk said in February, noting he thinks there is a 20% chance of human "annihilation" by AI.

While estimates vary, the idea that advanced AI systems could destroy humanity traces back to the origin of many of the labs developing the technology today. In 2015, Altman called the development of superhuman machine intelligence "probably the greatest threat to the continued existence of humanity." Alongside Hassabis and Amodei, he signed a statement in May 2023 declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

"It strikes me as odd that some leaders think that AI can be so brilliant that it will solve the world's problems, using solutions we didn't think of, but not so brilliant that it can't escape whatever control constraints we think of," says Margaret Mitchell, Chief Ethics Scientist at Hugging Face.
Director Christopher Nolan reckons with AI's 'Oppenheimer moment'
When Nolan began working on the movie about the 20th-century scientist, he says he had no idea it would be so relevant to this year's tech debate. He frequently discussed AI during his "Oppenheimer" media blitz, and in November he was awarded the Federation of American Scientists' Public Service Award alongside policymakers working on artificial intelligence, including Sen. Charles E. Schumer (D-N.Y.) and Sen. Todd C. Young (R-Ind.).
Christopher Nolan says AI experts face their 'Oppenheimer moment'
The Oppenheimer director, Christopher Nolan, has highlighted the difficulties of applying nuclear weapons-style regulation to artificial intelligence, warning that the United Nations had become a "very diminished" force. Nolan told the Guardian that J Robert Oppenheimer's call for international control of nuclear weapons had "sort of come true," but there had nonetheless been extensive proliferation of the technology since the "father of the atomic bomb" led the Manhattan Project in the second world war.

"To look at the international control of nuclear weapons and feel that the same principles could be applied to something that doesn't require massive industrial processes – it's a bit tricky," he said. "International surveillance of nuclear weapons is possible because nuclear weapons are very difficult to build. Oppenheimer spent $2bn and used thousands of people across America to build those first bombs. It's reassuringly difficult to make nuclear weapons and so it's relatively easy to spot when a country is doing that. I don't believe any of that applies to AI."